The success of neural fields for 3D vision tasks is now indisputable. Following this trend, several methods aiming at visual localization (e.g., SLAM) have been proposed that estimate distance or density fields using neural fields. However, it is difficult to achieve high localization performance with density-field-based methods alone, such as Neural Radiance Fields (NeRF), because they provide no density gradient in most empty regions. On the other hand, distance-field-based methods such as Neural Implicit Surfaces (NeuS) are limited in the object surface shapes they can represent. This paper proposes the Neural Density-Distance Field (NeDDF), a novel 3D representation that reciprocally constrains the distance and density fields. We extend the distance field formulation to shapes without explicit boundary surfaces, such as fur or smoke, which enables an explicit conversion from the distance field to the density field. The consistent distance and density fields realized by this explicit conversion provide both robustness to initial values and high-quality registration. Furthermore, the consistency between the fields allows fast convergence from sparse point clouds. Experiments show that NeDDF achieves high localization performance while providing comparable results on novel view synthesis. The code is available at https://github.com/ueda0319/neddf.
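NeDDF's key idea is that a density field can be induced from a distance field. The paper's actual conversion involves the distance gradient and is more involved; as a simpler stand-in, the sketch below uses a VolSDF-style Laplace-CDF mapping from signed distance to density, purely to illustrate how one field can determine the other. All function names and parameters here are illustrative, not taken from the NeDDF codebase.

```python
import numpy as np

def sphere_sdf(points, center=np.zeros(3), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

def density_from_distance(d, alpha=10.0, beta=0.1):
    """Illustrative Laplace-CDF conversion (VolSDF-style), a stand-in for
    NeDDF's distance->density link: density rises smoothly toward alpha
    as the signed distance d crosses zero from outside to inside."""
    return alpha * np.where(d > 0,
                            0.5 * np.exp(-d / beta),
                            1.0 - 0.5 * np.exp(d / beta))

pts = np.array([[0.0, 0.0, 0.0],    # sphere center (inside the surface)
                [0.0, 0.0, 2.0]])   # one radius outside the surface
d = sphere_sdf(pts)                 # signed distances: [-1.0, 1.0]
sigma = density_from_distance(d)    # high inside, near zero outside
```

With such an explicit link, optimizing one field constrains the other, which is the consistency property the abstract credits for robust registration.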
In the present work, we show that the performance of formula-driven supervised learning (FDSL) can match or even exceed that of ImageNet-21k pre-training, without using any real images, human supervision, or self-supervision during the pre-training of Vision Transformers (ViTs). For example, ViT-Base pre-trained on ImageNet-21k shows 81.8% top-1 accuracy when fine-tuned on ImageNet-1k, and FDSL reaches comparable accuracy when pre-trained under the same conditions (number of images, hyperparameters, and number of epochs). Images generated by formulas avoid the privacy/copyright issues, labeling costs and errors, and biases that real images suffer from, and thus have great potential for pre-training general-purpose models. To understand the performance of the synthetic images, we tested two hypotheses: (i) object contours are what matter in FDSL datasets, and (ii) an increased number of parameters used to create labels improves the performance of FDSL pre-training. To test the former hypothesis, we constructed a dataset consisting of combinations of simple object contours. We found that this dataset can match the performance of fractals. For the latter hypothesis, we found that increasing the difficulty of the pre-training task generally leads to better fine-tuning accuracy.
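The flavor of formula-driven data can be shown with a toy generator: the sketch below rasterizes regular-polygon contours and labels each image by its vertex count, so every (image, label) pair comes from a formula rather than a real photograph. This is a hypothetical minimal example, not the contour-combination dataset actually constructed in the paper.

```python
import numpy as np

def polygon_contour_image(n_vertices, size=64, radius=0.4, samples=2000):
    """Rasterize the contour of a regular n-gon onto a size x size grid.
    Returns a binary image; the class label is simply n_vertices."""
    img = np.zeros((size, size), dtype=np.uint8)
    # vertices of a regular polygon centered in the unit square
    angles = 2 * np.pi * np.arange(n_vertices) / n_vertices
    verts = 0.5 + radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    # sample points along each edge and mark the pixels they hit
    for i in range(n_vertices):
        a, b = verts[i], verts[(i + 1) % n_vertices]
        t = np.linspace(0.0, 1.0, samples // n_vertices)[:, None]
        pts = a + t * (b - a)
        ij = np.clip((pts * size).astype(int), 0, size - 1)
        img[ij[:, 1], ij[:, 0]] = 1
    return img

# a tiny synthetic "dataset": five classes, zero real images, zero labeling cost
dataset = [(polygon_contour_image(n), n) for n in range(3, 8)]
```

Because labels fall out of the generating formula, there is nothing to mislabel and nothing whose privacy or copyright could be violated, which is the practical appeal the abstract describes.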
The purpose of this study is to determine whether current video datasets have sufficient data for training very deep convolutional neural networks (CNNs) with spatio-temporal three-dimensional (3D) kernels. Recently, the performance levels of 3D CNNs in the field of action recognition have improved significantly. However, to date, conventional research has only explored relatively shallow 3D architectures. We examine the architectures of various 3D CNNs from relatively shallow to very deep ones on current video datasets. Based on the results of those experiments, the following conclusions could be obtained: (i) training resulted in significant overfitting for UCF-101, HMDB-51, and ActivityNet but not for Kinetics. (ii) The Kinetics dataset has sufficient data for training deep 3D CNNs, and enables training of ResNets of up to 152 layers, interestingly similar to 2D ResNets on ImageNet. ResNeXt-101 achieved 78.4% average accuracy on the Kinetics test set. (iii) Simple 3D architectures pretrained on Kinetics outperform complex 2D architectures, and the pretrained ResNeXt-101 achieved 94.5% and 70.2% on UCF-101 and HMDB-51, respectively. The use of 2D CNNs trained on ImageNet has produced significant progress in various image tasks. We believe that using deep 3D CNNs together with Kinetics will retrace the successful history of 2D CNNs and ImageNet, and stimulate advances in computer vision for videos. The codes and pretrained models used in this study are publicly available.
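A spatio-temporal 3D kernel convolves jointly over time and the two spatial axes, unlike a 2D kernel that sees each frame independently. The naive NumPy sketch below (an illustration of the operation itself, not an actual 3D ResNet layer) shows the shape arithmetic for a single-channel clip:

```python
import numpy as np

def conv3d_valid(video, kernel):
    """Naive valid-mode 3D convolution (cross-correlation, as in
    deep-learning frameworks) of a (T, H, W) clip with a (t, h, w) kernel."""
    T, H, W = video.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # each output value mixes a small window of frames AND pixels
                out[i, j, k] = np.sum(video[i:i+t, j:j+h, k:k+w] * kernel)
    return out

clip = np.random.rand(8, 16, 16)      # 8 frames of 16x16 pixels
kernel = np.ones((3, 3, 3)) / 27.0    # 3x3x3 spatio-temporal averaging kernel
feat = conv3d_valid(clip, kernel)     # shape (6, 14, 14)
```

The cubic kernels are what multiply the parameter count relative to 2D CNNs, which is why these architectures demand a dataset of Kinetics' scale to train without overfitting.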
One of the challenges in the study of generative adversarial networks is the instability of their training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on the CIFAR10, STL-10, and ILSVRC2012 datasets, and experimentally confirmed that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to previous training stabilization techniques. The code with Chainer (Tokui et al., 2015), generated images, and pretrained models are available at https://github.com/pfnet-research/sngan_projection.
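Spectral normalization divides each weight matrix by its largest singular value, so the layer becomes (approximately) 1-Lipschitz; SN-GAN estimates that value cheaply with power iteration rather than a full SVD. A minimal NumPy sketch of the idea (illustrative, not the paper's Chainer implementation, which reuses one power-iteration step per training update):

```python
import numpy as np

def spectral_normalize(W, n_iter=100):
    """Divide W by an estimate of its largest singular value,
    obtained with power iteration on W W^T."""
    u = np.random.default_rng(0).normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v        # estimated spectral norm of W
    return W / sigma

W = np.random.default_rng(1).normal(size=(32, 16))
W_sn = spectral_normalize(W)  # largest singular value now close to 1
```

Because only a couple of matrix-vector products are added per step, the technique stays "computationally light" in the sense the abstract claims, unlike gradient-penalty methods that need an extra backward pass.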